Possible Minds: 25 Ways of Looking at AI

In Possible Minds, Brockman, founder of the online salon Edge.org, corrals 25 big brains—ranging from Nobel Prize-winning physicist Frank Wilczek to roboticist extraordinaire Rodney Brooks—to opine on this exhilarating, terrifying future.

“Artificial intelligence is today’s story–the story behind all other stories. It is the Second Coming and the Apocalypse at the same time: Good AI versus evil AI.” –John Brockman

More than sixty years ago, mathematician-philosopher Norbert Wiener published a book on the place of machines in society that ended with a warning: “we shall never receive the right answers to our questions unless we ask the right questions…. The hour is very late, and the choice of good and evil knocks at our door.” In the wake of advances in unsupervised, self-improving machine learning, a small but influential community of thinkers is considering Wiener’s words again. In Possible Minds, John Brockman gathers their disparate visions of where AI might be taking us. The fruit of the long history of Brockman’s profound engagement with the most important scientific minds who have been thinking about AI–from Alison Gopnik and David Deutsch to Frank Wilczek and Stephen Wolfram–Possible Minds is an ideal introduction to the landscape of crucial issues AI presents. The collision between opposing perspectives is salutary and exhilarating; some of these figures, such as computer scientist Stuart Russell, Skype co-founder Jaan Tallinn, and physicist Max Tegmark, are deeply concerned with the threat of AI, including the existential one, while others, notably robotics entrepreneur Rodney Brooks, philosopher Daniel Dennett, and bestselling author Steven Pinker, have a very different view. Serious, searching and authoritative, Possible Minds lays out the intellectual landscape of one of the most important topics of our time.


In one of his well-crafted Edge.org books, John Brockman assembles twenty-five of the most important scientific minds who have been contemplating the field of artificial intelligence. Brockman planted the seed of the conversation with Cybernetics: Or Control and Communication in the Animal and the Machine, the book on the place of machines in society that mathematician-philosopher Norbert Wiener published more than sixty years ago, and that ended with a warning: “we shall never receive the right answers to our questions unless we ask the right questions…. The hour is very late, and the choice of good and evil knocks at our door.”


From left: W. Daniel Hillis, Neil Gershenfeld, Frank Wilczek, David Chalmers, Robert Axelrod, Tom Griffiths, Caroline Jones, Peter Galison, Alison Gopnik, John Brockman, George Dyson, Freeman Dyson, Seth Lloyd, Rod Brooks, Stephen Wolfram, Ian McEwan. In absentia: Andy Clark, George M. Church, Daniel Kahneman, Alex "Sandy" Pentland, Venki Ramakrishnan

Possible Minds Conference

Key Ideas

  • Education is as hard and slow for computers as it is for teenagers. Consequently, systems based on deep learning are becoming more rather than less human. The skills they bring to learning are not “better than” but “complementary to” human learning: Computer learning systems can identify patterns that humans cannot—and vice versa. The world’s best chess players are neither computers nor humans but humans working together with computers.

  • Current machine-learning systems operate almost exclusively in a statistical, or model-blind, mode, which is analogous in many ways to fitting a function to a cloud of data points. Such systems cannot reason about “What if?” questions and, therefore, cannot serve as the basis for Strong AI—that is, artificial intelligence that emulates human-level reasoning and competence. What humans had that other species lacked was a mental representation of their environment—a representation that they could manipulate at will to imagine alternative hypothetical environments for planning and learning.
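
As a concrete picture of that "model-blind" mode, here is a minimal curve-fitting sketch (my own toy data, not anything from the book): the fit summarizes the cloud of points, but it encodes nothing that could answer an interventional "What if?" question.

```python
import numpy as np

# Fabricated toy data: noisy observations of some unknown relationship.
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 2.0 * x + 1.0 + rng.normal(scale=2.0, size=x.size)

# "Model-blind" learning: fit a curve to the cloud of data points.
coeffs = np.polyfit(x, y, deg=3)      # purely associational cubic fit
predict = np.poly1d(coeffs)

print(predict(5.0))   # interpolates what was *observed* around x = 5
# The fit cannot say what y would be if we *intervened* to set x, because
# the data alone cannot distinguish correlation from causation.
```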

  • The decisive ingredient that gave our ancestors the ability to achieve global dominion about forty thousand years ago was their ability to create and store a mental representation of their environment, interrogate that representation, distort it by mental acts of imagination, and finally answer the “What if?” kinds of questions.

  • Human-level AI cannot emerge solely from model-blind learning machines; it requires the symbiotic collaboration of data and models.

  • Data science is a science only to the extent that it facilitates the interpretation of data—a two-body problem, connecting data to reality. Data alone are hardly a science, no matter how “big” they get and how skillfully they are manipulated. Opaque learning systems may get us to Babylon, but not to Athens.

  • Unfortunately, neither AI nor other disciplines (economics, statistics, control theory, operations research) built around the optimization of objectives have much to say about how to identify the purposes “we really desire.” Instead, they assume that objectives are simply implanted into the machine.

  • A more precise definition is given by the framework of cooperative inverse-reinforcement learning, or CIRL. A CIRL problem involves two agents, one human and the other a robot. Because there are two agents, the problem is what economists call a game. It is a game of partial information, because while the human knows the reward function, the robot doesn’t—even though the robot’s job is to maximize it. A robot that’s uncertain about human preferences actually benefits from being switched off, because it understands that the human will press the off switch to prevent the robot from doing something counter to those preferences. Thus the robot is incentivized to preserve the off switch, and this incentive derives directly from its uncertainty about human preferences.
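
The off-switch incentive can be made concrete with a small numerical sketch (invented utilities of mine, not code from the book): when the robot is uncertain about the human payoff U of its proposed action, deferring to a human who may switch it off is, in expectation, at least as good as acting unilaterally or disabling the switch.

```python
import numpy as np

rng = np.random.default_rng(1)

# The robot's uncertainty about how much the human values its proposed action.
# (Hypothetical prior: the action might help or harm; the robot doesn't know.)
U = rng.normal(loc=0.0, scale=1.0, size=100_000)

act_now        = U.mean()                 # act, bypassing the human
disable_switch = U.mean()                 # same payoff, but removes oversight
defer_to_human = np.maximum(U, 0).mean()  # human allows the action only if U > 0

print(f"act now:        {act_now:+.3f}")
print(f"disable switch: {disable_switch:+.3f}")
print(f"defer to human: {defer_to_human:+.3f}")
# E[max(U, 0)] >= max(E[U], 0): as long as the robot is uncertain about our
# preferences, it expects to do better by leaving the off switch in our hands.
```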

  • Finding a solution to the AI control problem is an important task; it may be, in Bostrom’s words, “the essential task of our age.”

  • Alan Turing wondered what it would take for machines to become intelligent. John von Neumann wondered what it would take for machines to self-reproduce. Claude Shannon wondered what it would take for machines to communicate reliably, no matter how much noise intervened. Norbert Wiener wondered how long it would take for machines to assume control.

  • Any system simple enough to be understandable will not be complicated enough to behave intelligently, while any system complicated enough to behave intelligently will be too complicated to understand.

  • The hidden premise in Turing’s almost-argument was: Only a conscious, intelligent agent could devise and control a winning strategy in the imitation game. And so it was persuasive to Turing (and others, including me, still a stalwart defender of the Turing Test) to argue that a “computing machine” that could pass as human in a contest with a human might not be conscious in just the way a human being is, but would nevertheless have to be a conscious agent of some kind. I think this is still a defensible position—the only defensible position—but you have to understand how resourceful and ingenious a judge would have to be to expose the shallowness of the façade that a deep-learning AI (a tool, not a colleague) could present.

  • Once we recognize that people are starting to make life-or-death decisions largely on the basis of “advice” from AI systems whose inner operations are unfathomable in practice, we can see a good reason why those who in any way encourage people to put more trust in these systems than they warrant should be held morally and legally accountable.

  • Humankind has gotten itself into a fine pickle: We are being exploited by companies that paradoxically deliver services we crave, and at the same time our lives depend on many software-enabled systems that are open to attack. Getting ourselves out of this mess will be a long-term project. It will involve engineering, legislation, and, most important, moral leadership. Moral leadership is the first and biggest challenge.

  • The human mind emerges from matter. Matter is what physics says it is. Therefore, the human mind emerges from physical processes we understand and can reproduce artificially. Therefore, natural intelligence is a special case of artificial intelligence.

  • Consciousness is the cosmic awakening; it transformed our Universe from a mindless zombie with no self-awareness into a living ecosystem harboring self-reflection, beauty, hope, meaning, and purpose. Had that awakening never taken place, our Universe would have been pointless—a gigantic waste of space. Should our Universe go back to sleep permanently due to some cosmic calamity or self-inflicted mishap, it will become meaningless again.

  • Intelligence is simply a certain kind of information processing performed by elementary particles moving around, and there’s no law of physics that says one can’t build machines more intelligent in every way than we are, and able to seed cosmic life. Intelligence isn’t good or evil but morally neutral. It’s simply an ability to accomplish complex goals, good or bad.

  • The existence of affordable AGI means, by definition, that all jobs can be done more cheaply by machines, so anyone claiming that “people will always find new well-paying jobs” is in effect claiming that AI researchers will fail to build AGI.

  • If we can amplify our own intelligence with AGI, we have the potential to solve today’s and tomorrow’s thorniest problems, including disease, climate change, and poverty. The more detailed we can make our shared positive visions for the future, the more motivated we will be to work together to realize them.

  • AI safety research must be carried out with a strict deadline in mind: Before AGI arrives, we need to figure out how to make AI understand, adopt, and retain our goals. The real risk with AGI isn’t malice but competence. A superintelligent AGI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we’re in trouble.

  • There’s no meaning encoded in the laws of physics, so instead of passively waiting for our Universe to give meaning to us, let’s acknowledge and celebrate that it’s we conscious beings who give meaning to our Universe. Let’s create our own meaning, based on something more profound than having jobs. AGI can enable us to finally become the masters of our own destiny.

  • The AI-risk deniers often have financial or other pragmatic motives. One of the leading motives is corporate profits. AI is profitable, and even in instances where it isn’t, it’s at least a trendy, forward-looking enterprise with which to associate your company.

  • Knowledge can be explained as patterns in matter or energy that stand in systematic relations with states of the world, with mathematical and logical truths, and with one another. Reasoning can be explained as transformations of that knowledge by physical operations that are designed to preserve those relations. Purpose can be explained as the control of operations to effect changes in the world, guided by discrepancies between its current state and a goal state.

  • Despite the progress in machine learning, particularly multilayered artificial neural networks, current AI systems are nowhere near achieving general intelligence (if that concept is even coherent). Instead, they are restricted to problems that consist of mapping well-defined inputs to well-defined outputs in domains where gargantuan training sets are available, in which the metric for success is immediate and precise, in which the environment doesn’t change, and in which no stepwise, hierarchical, or abstract reasoning is necessary. Even if an artificial intelligence system tried to exercise a will to power, without the cooperation of humans it would remain an impotent brain in a vat. When we put aside fantasies like digital megalomania, instant omniscience, and perfect knowledge and control of every particle in the universe, artificial intelligence is like any other technology. It is developed incrementally, designed to satisfy multiple conditions, tested before it is implemented, and constantly tweaked for efficacy and safety.


  • All thinking is a form of computation, and any computer whose repertoire includes a universal set of elementary operations can emulate the computations of any other. Hence human brains can think anything that AGIs can, subject only to limitations of speed or memory capacity, both of which can be equalized by technology.

  • Value alignment is the subject of a small but growing literature in artificial-intelligence research. One of the tools used for solving this problem is inverse-reinforcement learning. Reinforcement learning is a standard method for training intelligent machines. By associating particular outcomes with rewards, a machine-learning system can be trained to follow strategies that produce those outcomes. Inverse reinforcement learning turns this approach around: By observing the actions of an intelligent agent that has already learned effective strategies, we can infer the rewards that led to the development of those strategies. Inverse reinforcement learning is a statistical problem: We have some data—the behavior of an intelligent agent—and we want to evaluate various hypotheses about the rewards underlying that behavior.
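
A toy version of that statistical problem, as a hedged sketch with invented behavior data and a softmax choice model (assumptions of mine, not the book's formulation): score a few candidate reward functions by how well they explain the observed choices, then normalize to get a posterior over them.

```python
import numpy as np

# Three states the agent could choose between; we observe its actual choices.
observed_choices = [2, 2, 1, 2, 2]   # hypothetical behavior data

# Candidate reward functions (hypotheses about what the agent values).
hypotheses = {
    "prefers state 2": np.array([0.0, 0.0, 1.0]),
    "prefers state 1": np.array([0.0, 1.0, 0.0]),
    "indifferent":     np.array([0.0, 0.0, 0.0]),
}

def choice_probs(rewards, beta=3.0):
    """Softmax choice model: higher-reward states are chosen more often."""
    e = np.exp(beta * rewards)
    return e / e.sum()

prior = {name: 1.0 / len(hypotheses) for name in hypotheses}
posterior = {}
for name, rewards in hypotheses.items():
    p = choice_probs(rewards)
    likelihood = np.prod([p[c] for c in observed_choices])
    posterior[name] = prior[name] * likelihood

total = sum(posterior.values())
for name in posterior:
    print(name, round(posterior[name] / total, 3))
# The reward hypothesis that best explains the observed behavior ends up with
# most of the posterior mass: inference runs from actions back to rewards.
```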

  • The trade-off between making mistakes and thinking too much that characterizes human cognition is a trade-off faced by any real-world intelligent agent. Human beings are an amazing example of systems that act intelligently despite significant computational constraints. We’re quite good at developing strategies that allow us to solve problems pretty well without working too hard. Understanding how we do this will be a step toward making computers work smarter, not harder.

  • We need to enable robots to reason about us—to see us as something more than obstacles or perfect game players. We need them to take our human nature into account, so that they are well coordinated and well aligned with us. If we succeed, we will indeed have tools that substantially increase our quality of life.

  • We live in a world of countless gradients, from light and heat to gravity and chemical trails (chemtrails!). Water flows along a gravity gradient downhill, and your body lives on chemical solutions flowing across cell membranes from high concentration to low. Every action in the universe is driven by some gradient, from the movement of the planets around gravity gradients to the joining of atoms along electric-charge gradients to form molecules. Our own urges, such as hunger and sleepiness, are driven by electrochemical gradients in our bodies. The natural state of all systems is to flow to lower energy states, a process that is broadly described by entropy (the tendency of things to go from ordered to disordered states; all things will fall apart eventually, including the universe itself). Our brains operate the same way as any other complex system with layers and feedback loops, all pursuing what we mathematically call “optimization functions” but you could just as well call “flowing downhill” in some sense. The essence of intelligence is learning, and we do that by correlating inputs with positive or negative scores (rewards or punishments).
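
"Flowing downhill" along an optimization function is what gradient descent does. Here is a minimal sketch with a made-up one-dimensional energy landscape (my own toy example, not anything from the chapter):

```python
# Gradient descent on a simple "energy landscape": the system flows downhill
# along the gradient until it settles in a low-energy state.
def loss(x):
    return (x - 3.0) ** 2      # toy landscape with its minimum at x = 3

def grad(x):
    return 2.0 * (x - 3.0)     # slope of the landscape at x

x = 10.0                        # arbitrary starting point
for step in range(50):
    x -= 0.1 * grad(x)         # take a small step downhill

print(round(x, 4))              # close to 3.0, the bottom of the valley
```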

  • While Google Books may help circulate hundreds of thousands of works of literature for free, Google itself—like Facebook, Amazon, Twitter, and their many imitators—has commandeered a baser form of “information” and exploited it for extraordinary profit. Petabytes of Shannon-like information—a seemingly meaningless stream of clicks, “likes,” and retweets, collected from virtually every person who has ever touched a networked computer—are sifted through proprietary “deep-learning” algorithms to microtarget everything from the advertisements we see to the news stories (fake or otherwise) we encounter while browsing the Web.

  • The “deep” part of deep learning refers not to the (hoped-for) depth of insight but to the depth of the mathematical network layers used to make predictions. It turned out that a linear increase in network complexity led to an exponential increase in the expressive power of the network. The solution to the curse of dimensionality came in using information about the problem to constrain the search. The search algorithms themselves are not new. But when applied to a deep-learning network, they adaptively build up representations of where to search. Neural networks started out with a goal of modeling how the brain works. That goal was abandoned as they evolved into mathematical abstractions unrelated to how neurons actually function. But now there’s a kind of convergence that can be thought of as forward-rather than reverse-engineering biology, as the results of deep learning echo brain layers and regions.
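
The "depth" in question is literal: a prediction is produced by composing many simple layers. A schematic forward pass with random, untrained weights (a sketch under that assumption, not a working model) looks like this:

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, n_out):
    """One layer: a linear map followed by a simple nonlinearity (ReLU)."""
    w = rng.normal(size=(x.size, n_out))
    return np.maximum(w.T @ x, 0.0)

x = rng.normal(size=8)           # some input features
for n_out in (16, 16, 16, 4):    # the "deep" part: several stacked layers
    x = layer(x, n_out)

print(x)                          # the network's (untrained) output
# Each added layer composes with the ones below it, which is why expressive
# power grows so quickly with depth.
```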

  • The mother of all design problems is the one that resulted in us. The way we’re designed resides in one of the oldest and most conserved parts of the genome, called the Hox genes. These are genes that regulate genes, in what are called developmental programs. Nothing in your genome stores the design of your body; your genome stores, rather, a series of steps to follow that results in your body. This is an exact parallel to how search is done in AI. There are too many possible body plans to search over, and most modifications would be either inconsequential or fatal. The Hox genes are a representation of a productive place for evolutionary search. It’s a kind of natural intelligence at the molecular level.

  • Although machine making and machine thinking might appear to be unrelated trends, they lie in each other’s futures. The same scaling trends that have made AI possible suggest that the current mania is a phase that will pass, to be followed by something even more significant: the merging of artificial and natural intelligence.

  • Today, the most powerful and intelligent collections of machines are probably owned by Google, but companies like Amazon, Baidu, Microsoft, Facebook, Apple, and IBM may not be far behind. These companies all see a business imperative to build artificial intelligences of their own. It is easy to imagine a future in which corporations independently build their own machine intelligences, protected within firewalls preventing the machines from taking advantage of one another’s knowledge. These machines will be designed to have goals aligned with those of the corporation. The attitude of a self-interested super-AI toward hybrid superintelligences is likely to be competitive. Humans might be seen as minor annoyances, like ants at a picnic, but hybrid superintelligences—like corporations, organized religions, and nation-states—could be existential threats. Like hybrid superintelligences, AIs might see humans mostly as useful tools to accomplish their goals, as pawns in their competition with the other superintelligences. Or we might simply be irrelevant.

  • The final scenario is that machine intelligences will not be allied with one another but instead will work to further the goals of humanity as a whole. In this optimistic scenario, AI could help us restore the balance of power between the individual and the corporation, between the citizen and the state. It could help us solve the problems that have been created by hybrid superintelligences that subvert the goals of humans.

  • The question of who owns data about us will be paramount. Moreover, data-based decisions will undoubtedly reflect social biases: Even an allegedly neutral intelligent system designed to predict loan risks, say, may conclude that mere membership in a particular minority group makes you more likely to default on a loan. While this is an obvious example that we could correct, the real danger is that we are not always aware of biases in the data and may simply perpetuate them. The fight between dominant companies today is really a fight for control over our data. They will use their enormous influence to prevent regulation of data, because their interests lie in unfettered control of it. Moreover, they have the financial resources to hire the most talented workers in the field, enhancing their power even further.

  • Anthropologist and anarchist David Graeber describes the growth of “bullshit jobs”: while jobs that produce essentials like food, shelter, and goods have been largely automated away, we have seen an enormous expansion of sectors like corporate law, academic and health administration (as opposed to actual teaching, research, and the practice of medicine), “human resources,” and public relations, not to mention new industries like financial services and telemarketing and ancillary industries in the so-called gig economy that serve those who are too busy doing all that additional work.

  • People are scared about AI. Perhaps they should be. But they need to realize that AI feeds on data. Without data, AI is nothing. You don’t have to watch the AI; instead you should watch what it eats and what it does. Current AI machine-learning algorithms are, at their core, dead simple stupid. They work, but they work by brute force, so they need hundreds of millions of samples. They work because you can approximate anything with lots of little simple pieces.

  • Since Aristotle and Plato, there have been two basic ways of addressing the problem of how we know what we know, and they are still the main approaches in machine learning. Aristotle approached the problem from the bottom up: Start with senses—the stream of photons and air vibrations (or the pixels or sound samples of a digital image or recording)—and see if you can extract patterns from them. This approach was carried further by such classic associationists as philosophers David Hume and J. S. Mill and later by behavioral psychologists, like Pavlov and B. F. Skinner. On this view, the abstractness and hierarchical structure of representations is something of an illusion, or at least an epiphenomenon. All the work can be done by association and pattern detection—especially if there are enough data. Plato’s alternative was the top-down one: Maybe we get abstract knowledge from concrete data because we already know a lot, and especially because we already have an array of basic abstract concepts, thanks to evolution. Like scientists, we can use those concepts to formulate hypotheses about the world. Then, instead of trying to extract patterns from the raw data, we can make predictions about what the data should look like if those hypotheses are right. Along with Plato, such “rationalist” philosophers and psychologists as Descartes and Noam Chomsky took this approach. These two approaches to machine learning have complementary strengths and weaknesses. In the bottom-up approach, the program doesn’t need much knowledge to begin with, but it needs a great deal of data, and it can generalize only in a limited way. In the top-down approach, the program can learn from just a few examples and make much broader and more varied generalizations, but you need to build much more into it to begin with. A number of investigators are currently trying to combine the two approaches, using deep learning to implement Bayesian inference.
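
The top-down approach described here is essentially Bayes' rule: start from hypotheses, predict what the data should look like under each, and update on what is actually observed. A tiny worked example with an invented coin-flip setup (mine, not the chapter's):

```python
# Top-down learning in miniature: prior hypotheses about a coin, predictions
# about what data each hypothesis implies, then a Bayesian update on evidence.
hypotheses = {"fair coin": 0.5, "biased coin": 0.9}   # P(heads) under each
prior      = {"fair coin": 0.5, "biased coin": 0.5}

data = ["H", "H", "H", "H", "T", "H", "H"]            # hypothetical observations

posterior = {}
for name, p_heads in hypotheses.items():
    likelihood = 1.0
    for flip in data:
        likelihood *= p_heads if flip == "H" else (1.0 - p_heads)
    posterior[name] = prior[name] * likelihood

total = sum(posterior.values())
for name in posterior:
    print(name, round(posterior[name] / total, 3))
# A handful of examples is enough to shift belief sharply: the top-down
# approach learns from little data because so much structure is assumed upfront.
```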

  • As the convergence of algorithms and Big Data governs a greater and greater part of our lives, it would be well worth keeping in mind these two lessons from the history of the sciences: Judgment is not the discarded husk of a now pure objectivity of self-restraint. And mechanical objectivity is a virtue competing among others, not the defining essence of the scientific enterprise. They are lessons to bear in mind, even if algorists dream of objectivity.

  • The line between human and machines blurs, both because machines become more humanlike and because humans become more machinelike—not only since we increasingly blindly follow GPS scripts, reflex tweets, and carefully crafted marketing, but also as we digest ever more insights into our brain and genetic programming mechanisms.

  • The science-fiction prophet William Gibson said, “The future is already here—it’s just not very evenly distributed.” While this underestimates the next round of “future,” certainly millions of us are transhuman already—with most of us asking for more. The question “What was a human?” has already transmogrified into “What were the many kinds of transhumans? . . . And what were their rights?”

About the Author

John Brockman has had a broad career spanning the fields of art, science, books, software, and the Internet. In 1960 he established the bases for “intermedia kinetic environments” in art, theatre, and commerce, while consulting for clients such as General Electric, Columbia Pictures, The Pentagon, The White House… In 1973 he formed his own literary and software agency. He is the founder of the Edge Foundation and editor of Edge, a highly acclaimed website where the most outstanding thinkers, leaders of what he has termed the “Third Culture,” analyse cutting-edge science. He is the author and editor of several books, including The Third Culture (1995), The Greatest Inventions of the Past 2000 Years (2000), The Next Fifty Years (2002), and The New Humanists (2003). He has the distinction of being the only person to have been profiled on Page One of both the “Science Times” (1997) and the “Arts & Leisure” (1966) supplements of The New York Times.
